Entropy-Based Logic Explanations of Neural Networks

Authors

Abstract

Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains. Concept-based neural networks have arisen as explainable-by-design methods, as they leverage human-understandable symbols (i.e., concepts) to predict class memberships. However, most of these approaches focus on identifying the most relevant concepts but do not provide concise, formal explanations of how such concepts are leveraged by the classifier to make predictions. In this paper, we propose a novel end-to-end differentiable approach enabling the extraction of logic explanations from neural networks using the formalism of First-Order Logic. The method relies on an entropy-based criterion which automatically identifies the most relevant concepts. We consider four different case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in terms of classification accuracy.
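
To make the entropy-based criterion more concrete, the following is a minimal sketch of a hypothetical concept-selection layer in PyTorch: each input concept receives a learnable relevance score, a softmax over the scores gates the concept activations, and the entropy of that distribution can be added to the training loss to push the classifier toward using only a few concepts. The class name, parameterization, and loss weight are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class EntropyConceptSelector(nn.Module):
    """Hypothetical concept-selection layer gated by a learnable relevance distribution."""

    def __init__(self, n_concepts: int, temperature: float = 0.7):
        super().__init__()
        self.scores = nn.Parameter(torch.randn(n_concepts))  # one relevance score per concept
        self.temperature = temperature

    def forward(self, concepts: torch.Tensor) -> torch.Tensor:
        # alpha sums to 1 over concepts; a low-entropy alpha keeps only a few of them.
        alpha = torch.softmax(self.scores / self.temperature, dim=0)
        gate = alpha / alpha.max()          # rescale so the most relevant concept keeps weight 1
        return concepts * gate              # mask the concept activations

    def entropy(self) -> torch.Tensor:
        # Shannon entropy of the relevance distribution; penalizing it encourages conciseness.
        alpha = torch.softmax(self.scores / self.temperature, dim=0)
        return -(alpha * torch.log(alpha + 1e-12)).sum()


# Usage: prepend the selector to an ordinary classifier and penalize its entropy.
selector = EntropyConceptSelector(n_concepts=10)
classifier = nn.Sequential(selector, nn.Linear(10, 2))
x = torch.rand(32, 10)                                    # batch of concept activations in [0, 1]
logits = classifier(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (32,)))
loss = loss + 1e-4 * selector.entropy()                   # entropy penalty weight is arbitrary

After training, thresholding the gate values would indicate which concepts should appear in a logic explanation for each class.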

Similar Articles

Extracting Explanations from Neural Networks

The use of neural networks is still difficult in many application areas due to the lack of explanation facilities (the « black box » problem). An example of such applications is multiple criteria decision making (MCDM), applied to location problems having environmental impact. However, the concepts and methods presented are also applicable to other problem domains. These concepts show how to ex...

Constructing neural networks for multiclass-discretization based on information entropy

Cios and Liu (1992) proposed an entropy-based method to generate the architecture of neural networks for supervised two-class discretization. For multiclass discretization, the inter-relationship among classes is reduced to a set of binary relationships, and an independent two-class subnetwork is created for each binary relationship. This two-class-based method ends up with the disability of sh...
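
As a rough illustration of the decomposition described above (not Cios and Liu's original construction), the sketch below builds one small binary subnetwork per pair of classes and combines their outputs by voting; all names, sizes, and the voting rule are illustrative assumptions.

import itertools
import torch
import torch.nn as nn


class PairwiseBinaryEnsemble(nn.Module):
    """One independent two-class subnetwork per binary relationship between classes."""

    def __init__(self, n_features: int, n_classes: int, hidden: int = 8):
        super().__init__()
        self.pairs = list(itertools.combinations(range(n_classes), 2))
        self.n_classes = n_classes
        self.subnets = nn.ModuleList([
            nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in self.pairs
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        votes = torch.zeros(x.shape[0], self.n_classes, device=x.device)
        for (a, b), net in zip(self.pairs, self.subnets):
            p_a = torch.sigmoid(net(x)).squeeze(-1)  # probability of class a over class b
            votes[:, a] += p_a
            votes[:, b] += 1.0 - p_a
        return votes.argmax(dim=1)                   # class with the most pairwise votes


# Usage on random data: 3 classes yield 3 pairwise subnetworks.
model = PairwiseBinaryEnsemble(n_features=4, n_classes=3)
print(model(torch.rand(5, 4)))                       # 5 predicted class indices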

Transfer entropy-based feedback improves performance in artificial neural networks

The structure of the majority of modern deep neural networks is characterized by unidirectional feed-forward connectivity across a very large number of layers. By contrast, the architecture of the cortex of vertebrates contains fewer hierarchical levels but many recurrent and feedback connections. Here we show that a small, few-layer artificial neural network that employs feedback will reach to...

Pseudo-Entropy Based Pruning Algorithm for Feed forward Neural Networks

Design of artificial neural networks is an important and practical task: "how to choose the adequate size of neural architecture for a given application". One popular method to overcome this problem is to start with an oversized structure and then prune it to obtain a simpler network with a good generalization performance. This paper presents a pruning algorithm based on pseudo-entropy of hidden n...
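
The snippet below loosely sketches this start-oversized-then-prune idea. The paper's pseudo-entropy measure is not given in the excerpt above, so a histogram-based Shannon entropy of each hidden unit's activations is used as a stand-in, and "pruning" is done by zeroing the weights of the least informative units; all sizes and the number of kept units are arbitrary.

import torch
import torch.nn as nn


def activation_entropy(activations: torch.Tensor, bins: int = 10) -> torch.Tensor:
    """Shannon entropy of each hidden unit's activations (activations: samples x units)."""
    entropies = []
    for unit in activations.T:
        hist = torch.histc(unit, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropies.append(-(p * torch.log(p)).sum())
    return torch.stack(entropies)


# Oversized one-hidden-layer network; mask the least informative hidden units.
hidden, keep = 64, 32
net = nn.Sequential(nn.Linear(20, hidden), nn.Sigmoid(), nn.Linear(hidden, 2))
x = torch.rand(1000, 20)                        # stand-in for the training inputs
with torch.no_grad():
    h = net[1](net[0](x))                       # hidden-layer activations
    scores = activation_entropy(h)
    pruned = scores.argsort()[: hidden - keep]  # indices of the lowest-entropy units
    net[0].weight[pruned] = 0.0                 # zero their incoming weights...
    net[0].bias[pruned] = 0.0
    net[2].weight[:, pruned] = 0.0              # ...and their outgoing weights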

Optimization of Entropy with Neural Networks

Abstract of the dissertation: Optimization of Entropy with Neural Networks, by Nicol Norbert Schraudolph, Doctor of Philosophy in Cognitive Science and Computer Science, University of California, San Diego, 1995; Professor Terrence J. Sejnowski, Chair. The goal of unsupervised learning algorithms is to discover concise yet informative representations of large data sets; the minimum description length principl...

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i6.20551